Topological data analysis (TDA) is an expanding field that leverages principles and tools from algebraic topology to quantify structural features of data sets or transform them into more manageable forms. As its theoretical foundations have been developed, TDA has shown promise in extracting useful information from high-dimensional, noisy, and complex data such as those used in biomedicine. To operate efficiently, these techniques may employ landmark samplers, either random or heuristic. The heuristic maxmin procedure obtains a roughly even distribution of sample points by implicitly constructing a cover comprising sets of uniform radius. However, issues arise with data that vary in density or include points with multiplicities, as are common in biomedicine. We propose an analogous procedure, "lastfirst", based on ranked distances, which implicitly constructs a cover comprising sets of uniform cardinality. We first rigorously define the procedure and prove that it obtains landmarks with the desired properties. We then perform benchmark tests and compare its performance to that of maxmin on feature detection and class prediction tasks involving simulated and real-world biomedical data. Lastfirst is more general than maxmin in that it can be applied to any data on which arbitrary (and not necessarily symmetric) pairwise distances can be computed. Lastfirst is more computationally costly, but our implementation scales at the same rate as maxmin. We find that lastfirst achieves comparable performance on prediction tasks and outperforms maxmin on homology detection tasks. Where the numerical values of similarity measures are not meaningful, as in many biomedical contexts, lastfirst sampling may also improve interpretability.
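For readers unfamiliar with the maxmin procedure referenced above, the following is a minimal Python sketch of greedy maxmin (farthest-point) landmark sampling. The function name and the use of Euclidean distances are illustrative assumptions; the paper's lastfirst procedure would replace the max-distance criterion with one based on ranked distances, yielding neighborhoods of uniform cardinality rather than uniform radius.

```python
import numpy as np
from scipy.spatial.distance import cdist

def maxmin_landmarks(X, n_landmarks, seed=0):
    """Greedy maxmin (farthest-point) sampling.

    Starting from an arbitrary point, repeatedly add the point whose
    distance to the current landmark set is largest, giving a roughly
    even spread of landmarks (a cover by balls of uniform radius).
    """
    rng = np.random.default_rng(seed)
    landmarks = [int(rng.integers(len(X)))]          # arbitrary first landmark
    # distance of every point to its nearest chosen landmark so far
    d_to_set = cdist(X, X[landmarks]).min(axis=1)
    while len(landmarks) < n_landmarks:
        nxt = int(d_to_set.argmax())                 # farthest point from the set
        landmarks.append(nxt)
        d_to_set = np.minimum(d_to_set, cdist(X, X[[nxt]]).ravel())
    return landmarks
```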
In the past few years, Artificial Intelligence (AI) has garnered attention from various industries, including financial services (FS). AI has made a positive impact in financial services by enhancing productivity and improving risk management. While AI can offer efficient solutions, it also has the potential to bring unintended consequences. One such consequence is the pronounced effect of AI-related unfairness and the attendant fairness-related harms. These fairness-related harms could involve differential treatment of individuals; for example, unfairly denying a loan to certain individuals or groups of individuals. In this paper, we focus on identifying and mitigating individual unfairness, leveraging some of the recently published techniques in this domain, especially as applicable to the credit adjudication use case. We also investigate the extent to which techniques for achieving individual fairness are effective at achieving group fairness. Our main contribution in this work is functionalizing a two-step training process: first, learning a fair similarity metric in a group sense using a small portion of the raw data, and second, training an individually "fair" classifier using the rest of the data with the sensitive features excluded. The key characteristic of this two-step technique is its flexibility, i.e., the fair metric obtained in the first step can be used with any other individual fairness algorithm in the second step. Furthermore, we developed a second metric (distinct from the fair similarity metric) to determine how fairly a model treats similar individuals. We use this metric to compare a "fair" model against its baseline model in terms of their individual fairness values. Finally, we present experimental results for the individual unfairness mitigation techniques.
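The two-step process described above can be pictured with a short, hypothetical Python sketch. The metric-learning step shown here (projecting out the direction most predictive of the sensitive attribute) is a generic stand-in rather than the paper's fair-metric construction, and all function and variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 1: on a small slice of the raw data, learn a group-informed similarity
# metric.  As a stand-in, use a Mahalanobis-style metric whose matrix projects
# out the direction most predictive of the sensitive attribute s_small.
def learn_fair_metric(X_small, s_small):
    w = LogisticRegression(max_iter=1000).fit(X_small, s_small).coef_.ravel()
    w = w / np.linalg.norm(w)
    return np.eye(X_small.shape[1]) - np.outer(w, w)   # metric matrix M

def fair_distance(M, x, y):
    d = x - y
    return float(np.sqrt(d @ M @ d))

# Step 2: train the classifier on the remaining data with the sensitive
# column removed; M can then be plugged into any individual-fairness
# algorithm (e.g. as a similarity constraint or regularizer).
def train_fair_classifier(X_rest, y_rest, sensitive_col):
    X_clean = np.delete(X_rest, sensitive_col, axis=1)
    return LogisticRegression(max_iter=1000).fit(X_clean, y_rest)
```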
The inception of large language models has helped advance state-of-the-art performance on numerous natural language tasks. This has also opened the door for the development of foundation models for other domains and data modalities such as images, code, and music. In this paper, we argue that business process data representations have unique characteristics that warrant the development of a new class of foundation models to handle tasks like process mining, optimization, and decision making. These models should also tackle the unique challenges of applying AI to business processes which include data scarcity, multi-modal representations, domain specific terminology, and privacy concerns.
Time series forecasting is a powerful data modeling discipline that analyzes historical observations to predict future values of a time series. It has been used in many applications, including but not limited to economics, meteorology, and health. In this paper, we use time series forecasting techniques to model and predict the future incidence of chickenpox. To achieve this, we implement and evaluate multiple models and data preprocessing techniques on a dataset collected in Hungary. We demonstrate that the LSTM model outperforms all other models in the vast majority of experiments for county-level forecasting, while the SARIMAX model performs best at the national level. We also show that traditional data preprocessing methods do not perform as well as the data preprocessing methods that we propose.
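As a rough illustration of the national-level modeling described above, the sketch below fits a seasonal ARIMA model with statsmodels. The file name, column names, and model orders are assumptions for illustration, not the paper's actual dataset layout or tuned configuration.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical weekly national-level case counts; names are placeholders.
cases = pd.read_csv("hungary_chickenpox.csv", parse_dates=["Date"], index_col="Date")
y = cases["national_cases"]

# Seasonal ARIMA with a yearly (52-week) seasonal period; these orders are
# illustrative defaults rather than the configuration selected in the paper.
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 52))
fit = model.fit(disp=False)
print(fit.forecast(steps=12))   # 12-week-ahead forecast
```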
Collective perception is a fundamental problem in swarm robotics, in which the swarm must reach consensus on a coherent representation of the environment. An important variant of collective perception casts it as an optimal decision-making process, in which the swarm must determine the most likely representation from a set of alternatives. Past work on this variant has focused primarily on characterizing how different algorithms navigate the speed-versus-accuracy trade-off when the swarm must decide on the most frequent environmental feature. Crucially, past work on optimal decision-making has assumed that the robots' sensors are perfect (noise- and fault-free), which limits the real-world applicability of these algorithms. In this paper, we derive from first principles an optimal, probabilistic framework for minimalistic swarm robots equipped with flawed sensors. We then validate our approach in a scenario where the swarm collectively decides the frequency of a certain environmental feature. We study the speed and accuracy of the decision-making process with respect to several parameters of interest. Our approach can provide timely and accurate frequency estimates even in the presence of severe sensory noise.
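A minimal sketch of the kind of probabilistic estimate involved: given a binary sensor with known sensitivity and false-positive rate, pooled readings can be turned into a posterior over the environmental fill ratio. The parameter names and grid-based posterior below are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def posterior_fill_ratio(k, n, sens, fpr, grid=None):
    """Posterior over the fill ratio f given k positive readings out of n
    samples from a faulty binary sensor.

    sens : P(reading = 1 | feature present)   (sensitivity)
    fpr  : P(reading = 1 | feature absent)    (false-positive rate)
    """
    if grid is None:
        grid = np.linspace(0.0, 1.0, 501)
    # probability that a single reading is positive, as a function of f
    p_pos = grid * sens + (1.0 - grid) * fpr
    log_like = k * np.log(p_pos + 1e-12) + (n - k) * np.log(1.0 - p_pos + 1e-12)
    post = np.exp(log_like - log_like.max())
    post /= post.sum()
    return grid, post

grid, post = posterior_fill_ratio(k=180, n=400, sens=0.9, fpr=0.15)
print("posterior mean fill ratio:", float((grid * post).sum()))
```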
Anomalies in time series provide insights into critical scenarios across a range of industries, from banking and aerospace to information technology, security, and medicine. However, identifying anomalies in time series data is especially challenging because of the ambiguous definition of an anomaly, the frequent lack of labels, and the extremely complex temporal correlations present in such data. The LSTM autoencoder is an encoder-decoder scheme for anomaly detection based on long short-term memory networks, which learns to reconstruct time series behavior and then uses the reconstruction error to identify anomalies. We introduce a denoising architecture as a complement to this LSTM encoder-decoder model and investigate its effect on real-world as well as artificially generated datasets. We demonstrate that the proposed architecture improves both accuracy and training speed, making the LSTM autoencoder more efficient for unsupervised anomaly detection tasks.
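The following Keras sketch shows a generic LSTM autoencoder for reconstruction-based anomaly detection, with Gaussian input noise standing in for a denoising mechanism. It is an assumption-laden illustration of the general scheme, not the paper's proposed denoising architecture; all names and hyperparameters are placeholders.

```python
from tensorflow.keras import layers, models

def build_lstm_autoencoder(timesteps, n_features, latent_dim=32):
    inputs = layers.Input(shape=(timesteps, n_features))
    # corrupt the input during training only (a generic denoising stand-in)
    noisy = layers.GaussianNoise(0.1)(inputs)
    encoded = layers.LSTM(latent_dim)(noisy)
    decoded = layers.RepeatVector(timesteps)(encoded)
    decoded = layers.LSTM(latent_dim, return_sequences=True)(decoded)
    outputs = layers.TimeDistributed(layers.Dense(n_features))(decoded)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# After training on (mostly) normal windows of shape (n, timesteps, n_features),
# score each window by its reconstruction error and flag those above a threshold:
#   errors = ((model.predict(windows) - windows) ** 2).mean(axis=(1, 2))
#   anomalies = errors > np.quantile(errors, 0.99)
```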
Antimicrobial resistance (AMR) is a growing public health threat, estimated to cause more than 10 million deaths per year and, under status-quo projections, 100 trillion USD in global economic losses by 2050. These losses are driven mainly by the increased morbidity and mortality of treatment failures, AMR infections acquired during medical procedures, and losses in quality of life attributed to AMR. Numerous interventions have been proposed to control the development of AMR and mitigate the risks posed by its spread. This paper reviews key aspects of bacterial AMR management and control that can leverage data technologies such as artificial intelligence, machine learning, and mathematical and statistical modeling, fields that have developed rapidly this century. Although data technologies have become integral to biomedical research, their impact on AMR management has remained modest. We outline the use of data technologies to combat AMR, detailing recent advances in four complementary categories: surveillance, prevention, diagnosis, and treatment. We provide an overview of current AMR control approaches that use data technologies in biomedical research, clinical practice, and the "One Health" context. We discuss the potential impact of data technologies and the implementation challenges they face in high- and middle-income countries, and recommend concrete actions needed to integrate these technologies more readily into health care and public health.